Linear Classifiers Under Infinite Imbalance

Authors
Abstract


Similar articles

Linear classifiers

Above, w ∈ R^d is a vector of real-valued weights, which we call a weight vector, and θ ∈ R is a threshold value. The weight vector (assuming it is non-zero) is perpendicular to a hyperplane of dimension d − 1 that passes through the point θw/‖w‖²; this hyperplane separates the points x ∈ R^d that are classified as +1 from those that are classified as −1 by f_{w,θ}. Homogeneous half-space functions are ha...
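
A minimal sketch of the half-space classifier f_{w,θ} described above, assuming the common convention that ties (w·x = θ) are assigned to the +1 class; the example weights and threshold are illustrative, not from the paper:

```python
import numpy as np

def halfspace_classifier(w, theta):
    """Return f_{w,theta}: x -> +1 if w·x >= theta, else -1.

    w is a non-zero weight vector in R^d and theta a real threshold.
    The decision boundary is the hyperplane {x : w·x = theta}, which is
    perpendicular to w and passes through the point theta*w/||w||^2.
    """
    w = np.asarray(w, dtype=float)
    return lambda x: 1 if np.dot(w, np.asarray(x, dtype=float)) >= theta else -1

# Example in R^2 with w = (1, 2) and theta = 1.
f = halfspace_classifier([1.0, 2.0], theta=1.0)
print(f([1.0, 1.0]))   # w·x = 3.0 >= 1.0  ->  +1
print(f([0.0, 0.0]))   # w·x = 0.0 <  1.0  ->  -1
```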


Robust linear semi-infinite programming duality under uncertainty

In this paper, we propose a duality theory for semi-infinite linear programming problems under uncertainty in the constraint functions, the objective function, or both, within the framework of robust optimization. We present robust duality by establishing strong duality between the robust counterpart of an uncertain semi-infinite linear program and the optimistic counterpart of its uncertain La...
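
As a hedged illustration of the setup (the truncated abstract does not pin down the exact formulation, so the "≥" form of the constraints and the notation below are assumptions), the robust counterpart of an uncertain semi-infinite linear program and the optimistic counterpart of its Lagrangian dual can be written as:

```latex
% Uncertain data: for each index t in a (possibly infinite) set T, the pair
% (a_t, b_t) is only known to lie in an uncertainty set U_t.
\begin{align*}
\text{(Robust counterpart)}\qquad
  &\inf_{x \in \mathbb{R}^n} \; c^{\top} x
   \quad\text{s.t.}\quad a^{\top} x \ge b
   \;\;\forall (a, b) \in \mathcal{U}_t,\ \forall t \in T, \\[4pt]
\text{(Optimistic dual)}\qquad
  &\sup_{\lambda,\ (a_t, b_t) \in \mathcal{U}_t}
   \; \sum_{t \in T} \lambda_t b_t
   \quad\text{s.t.}\quad \sum_{t \in T} \lambda_t a_t = c,\quad
   \lambda \in \mathbb{R}^{(T)}_{+},
\end{align*}
% where R^(T)_+ denotes nonnegative multipliers with finitely many nonzero
% entries; "strong duality" means the two optimal values coincide under
% suitable regularity conditions.
```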


Cascading Asymmetric Linear Classifiers

Motivation: Combinations of classifiers have been found useful empirically, yet there is no formal proof of their generalization ability. Our goal is to develop an algorithm to train a sequence of linear classifiers yielding a nonlinear decision surface. We believe that choosing asymmetric regularization parameters for each class can yield a sequence of classifiers that approximates arbitrarily...
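
Since the abstract only sketches the idea, here is a toy illustration of a cascade of linear classifiers with class-asymmetric weighting; it is not the paper's algorithm, and the stage count, the negative-class weighting, and the 0/1 label convention are assumptions made for the sketch:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_cascade(X, y, n_stages=3, neg_weight=5.0):
    """Train a cascade of linear classifiers (labels assumed to be 0/1).

    Each stage is a linear SVM with the negative class weighted more
    heavily, a simple stand-in for class-asymmetric regularization.
    Only points a stage labels positive are passed on, so the overall
    positive region is an intersection of half-spaces (piecewise linear).
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    stages = []
    for _ in range(n_stages):
        clf = LinearSVC(class_weight={0: neg_weight, 1: 1.0})
        clf.fit(X, y)
        stages.append(clf)
        keep = clf.predict(X) == 1
        if keep.sum() == 0 or len(np.unique(y[keep])) < 2:
            break                      # nothing left to refine at the next stage
        X, y = X[keep], y[keep]
    return stages

def predict_cascade(stages, X):
    """A point is positive only if every stage in the cascade says so."""
    X = np.asarray(X, dtype=float)
    votes = np.ones(len(X), dtype=bool)
    for clf in stages:
        votes &= (clf.predict(X) == 1)
    return votes.astype(int)
```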


Bagging for linear classifiers

Classifiers built on small training sets are usually biased or unstable. Different techniques exist to construct more stable classifiers. It is not clear which ones are good, and whether they really stabilize the classifier or just improve the performance. In this paper bagging (bootstrapping and aggregating (1)) is studied for a number of linear classifiers. A measure for the instability of cl...
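
A minimal sketch of bagging applied to a linear classifier, i.e. bootstrap resampling followed by aggregation by majority vote; the choice of logistic regression as the base learner and the number of resamples are illustrative assumptions, not choices made in the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bag_linear_classifiers(X, y, n_bags=25, seed=0):
    """Fit one linear classifier per bootstrap resample of (X, y)."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    models = []
    for _ in range(n_bags):
        idx = rng.integers(0, len(X), size=len(X))   # sample rows with replacement
        if len(np.unique(y[idx])) < 2:               # a resample needs both classes
            continue
        models.append(LogisticRegression().fit(X[idx], y[idx]))
    return models

def predict_majority_vote(models, X):
    """Aggregate the bagged classifiers by majority vote (labels in {0, 1})."""
    X = np.asarray(X, dtype=float)
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```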


Learning Mixtures of Linear Classifiers

We consider a discriminative learning (regression) problem, whereby the regression function is a convex combination of k linear classifiers. Existing approaches are based on the EM algorithm, or similar techniques, without provable guarantees. We develop a simple method based on spectral techniques and a ‘mirroring’ trick, that discovers the subspace spanned by the classifiers’ parameter vector...
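
For concreteness, a small sketch of the generative model the abstract describes, where each label comes from one of k linear classifiers chosen at random; the Gaussian features, mixing weights, and noise-free labels are illustrative assumptions, and the spectral "mirroring" estimator itself is not reproduced here:

```python
import numpy as np

def sample_mixture_of_linear_classifiers(n, w_list, probs, seed=0):
    """Draw (x, y) pairs where y = sign(w_j · x) and j is drawn from probs.

    The regression function E[y | x] is then the convex combination
    sum_j probs[j] * sign(w_j · x) of the k linear classifiers.
    """
    rng = np.random.default_rng(seed)
    W = np.asarray(w_list, dtype=float)              # shape (k, d)
    X = rng.standard_normal((n, W.shape[1]))         # isotropic Gaussian features
    comps = rng.choice(len(W), size=n, p=probs)      # latent classifier index
    y = np.sign(np.einsum("nd,nd->n", X, W[comps]))  # label from the chosen classifier
    return X, y, comps

# Example: two classifiers in R^3 mixed with weights 0.7 and 0.3.
X, y, comps = sample_mixture_of_linear_classifiers(
    1000, w_list=[[1.0, 0.0, 0.0], [0.0, 1.0, -1.0]], probs=[0.7, 0.3])
```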



Journal

Journal title: SSRN Electronic Journal

Year: 2021

ISSN: 1556-5068

DOI: 10.2139/ssrn.3863653